I was able to answer my own question after much brainstorming, and apparently the solution was very simple. Since /home/spidey/sopon3/rda-aof/ had been configured as the directory serving files accessible at just my-devdomain.com/data-file.pdf, all I had to do was create another directory inside /rda-aof and put my files there. Now the URL looks like this: my-devdomain.com/public/data-file.pdf. With this, I was able to configure Spring Security to allow /public/** without any authentication.
Fixed by the following:
// Connect the bot service to Microsoft Teams
resource botServiceMsTeamsChannel 'Microsoft.BotService/botServices/channels@2022-09-15' = {
  parent: botService
  location: 'global'
  name: 'MsTeamsChannel'
  properties: {
    channelName: 'MsTeamsChannel'
    properties: {
      acceptedTerms: true
      callingWebhook: 'https://${botAppDomain}/api/callback'
      deploymentEnvironment: 'CommercialDeployment'
      enableCalling: true
      // incomingCallRoute: 'https://${botAppDomain}/api/callback'
      isEnabled: true
    }
  }
}
I would suggest following the document linked below:
https://abp.io/docs/latest/framework/api-development/dynamic-csharp-clients
Removing the box-sizing line for the textarea worked for me (or at least replacing box-sizing: border-box; with box-sizing: content-box;).
I set export JAVA_HOME="/Applications/Android Studio.app/Contents/jbr/Contents/Home" and was still getting the error; setting the project's Gradle JDK to GRADLE_LOCAL_JAVA_HOME fixed it.
This works:
d %>%
  gtsummary::tbl_summary(
    data = .,
    include = -id,
    label = list(
      inf_1 ~ paste(attr(d$inf_1, "label"), paste0("(", attr(d$inf_1, "units"), ")")),
      inf_2 ~ attr(d$inf_2, "label")
    ),
    type = list(all_continuous() ~ "continuous2"),
    statistic = list(
      all_continuous() ~ c("{median} ({p25}, {p75})", "{min}, {max}"),
      all_categorical() ~ "{n} / {N} ({p}%)"
    )
  ) %>%
  gtsummary::as_gt()
This is unrelated to Docker itself. It's tied to a template file within the Kafka image provided by Confluent: kafka.properties.template. This template is processed by the configure script when the container starts, where the env variables are actually used to build the configuration (kafka.properties) file before starting Kafka itself.
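The idea behind that templating step can be sketched in Python. This is an illustration of the mechanism only, not the actual Confluent configure script (which also handles special cases); the env var names below are typical KAFKA_* examples.

```python
def kafka_env_to_property(name: str) -> str:
    """Convert a KAFKA_* env var name to a Kafka property key.

    Simplified sketch: strip the KAFKA_ prefix, lowercase,
    and turn underscores into dots. (The real script also
    special-cases names containing literal underscores.)
    """
    return name.removeprefix("KAFKA_").lower().replace("_", ".")

def render_properties(env: dict) -> str:
    """Build kafka.properties-style text from KAFKA_* env vars only."""
    lines = [
        f"{kafka_env_to_property(k)}={v}"
        for k, v in sorted(env.items())
        if k.startswith("KAFKA_")
    ]
    return "\n".join(lines)

env = {
    "KAFKA_BROKER_ID": "1",
    "KAFKA_ADVERTISED_LISTENERS": "PLAINTEXT://localhost:9092",
    "PATH": "/usr/bin",  # non-KAFKA_ vars are ignored
}
print(render_properties(env))
```

This is why setting a container env var such as KAFKA_ADVERTISED_LISTENERS ends up as advertised.listeners in the generated kafka.properties.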
This video may help: https://youtu.be/rGFuak8kdRo
I tried all of the above and they did not work. For me it was intuitive to simply type in the input box in Chrome and hope the answer would be accepted, but you have to select your typed words below the input box. This may appear to be a bug in the input box, so make sure you click the blue section below the input box to make your selection.
BeautifulSoup is just a parser: it works on the static HTML content retrieved from the server and can't handle JavaScript-rendered content, while Selenium can, because it drives a real browser.
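The limitation can be demonstrated with a quick sketch using only the stdlib html.parser (BeautifulSoup wraps a parser like this, so it behaves analogously): the parser sees only the HTML the server sent, so content that a script would inject at runtime simply isn't there.

```python
from html.parser import HTMLParser

# HTML as the server sends it: the div is empty, and the script
# would only populate it after a browser executes the JavaScript.
STATIC_HTML = """
<html><body>
<div id="content"></div>
<script>document.getElementById('content').textContent = 'hello';</script>
</body></html>
"""

class DivTextCollector(HTMLParser):
    """Collect the text found inside <div> tags in the static HTML."""
    def __init__(self):
        super().__init__()
        self.in_div = False
        self.div_text = []

    def handle_starttag(self, tag, attrs):
        if tag == "div":
            self.in_div = True

    def handle_endtag(self, tag):
        if tag == "div":
            self.in_div = False

    def handle_data(self, data):
        if self.in_div:
            self.div_text.append(data.strip())

parser = DivTextCollector()
parser.feed(STATIC_HTML)
# The div is empty in the static HTML, even though a browser
# executing the script would show 'hello' inside it.
print(repr("".join(parser.div_text)))  # ''
```

A browser-driving tool like Selenium would see 'hello' because the page's JavaScript has already run by the time the DOM is read.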
Use Localxpose.io; check out this tutorial: https://colab.research.google.com/drive/1CvsmJMH00Cli2K2OQJQYWFG-eNzGSuKl?usp=sharing
!pip install loclx-colab
import loclx_colab.loclx as lx
port = 8787  # The service port that you want to expose
access_token = "Your_Token_Here"  # Your LocalXpose token here
url = lx.http_tunnel_start(port, access_token)
if url:
    print(f"Your service is exposed to this URL: https://{url}")
The latest controller-tools can fix it: https://github.com/kubernetes-sigs/controller-tools
GO111MODULE=on go install sigs.k8s.io/controller-tools/cmd/[email protected]
Where can I get "vendor"?
You don't need to pass an id in for the associated entity role; it will get one automatically after it is created. You can then fetch it for test purposes with UserEntityRole.last.
Nothing worked for me but this:
val builder = AlertDialog.Builder(context, android.R.style.ThemeOverlay_DeviceDefault_Accent_DayNight)
This will cover the screen even if you have a small layout.
This has been fixed in Doxygen version 1.10.0. See https://github.com/doxygen/doxygen/issues/7688 for more info.
I think if you aren't finding good answers for your question anywhere, you should ask GPT, e.g. "tell me everything about [topic] in an easy to understand language". It will provide you with a detailed explanation, and you can make further modifications to it as well.
Your question is very generic.
To read:
# Read a file from the workspace
with open("/dbfs/workspace/<folder>/<file>.txt", "r") as file:
    content = file.read()
print(content)
To write:
# Write a file to the workspace
with open("/dbfs/workspace/<folder>/<file>.txt", "w") as file:
    file.write("This is a test file.")
Sometimes I use the dbutils API; here are some examples:
# Write a file to the workspace
dbutils.fs.put("workspace:/shared_folder/example.txt", "This is a test file.")
# Read the file
content = dbutils.fs.head("workspace:/shared_folder/example.txt")
print(content)
Let me know if the above is not working and I will help more. Cheers
For profiling, add these env vars in your docker-compose.yml:
environment:
  SPX_ENABLED: 1
  SPX_AUTO_START: 0
  SPX_REPORT: full
For viewing profiles, use some server with php-fpm, for example.
You can use services like https://localxpose.io/, and it is free. This is a full tutorial. https://colab.research.google.com/drive/1CvsmJMH00Cli2K2OQJQYWFG-eNzGSuKl?usp=sharing
Here is my attempt with Spannable; the text color does not change:
fun getAllMeds(): List<Medication> {
    val medList = mutableListOf<Medication>()
    val db = readableDatabase
    val query = "SELECT * FROM $TABLE_NAME"
    val cursor = db.rawQuery(query, null)
    while (cursor.moveToNext()) {
        val id = cursor.getInt(cursor.getColumnIndexOrThrow(COLUMN_ID))
        val medpill = cursor.getString(cursor.getColumnIndexOrThrow(COLUMN_MEDPILL))
        val medtaken = cursor.getString(cursor.getColumnIndexOrThrow(COLUMN_MEDTAKEN))
        val spannable = SpannableString("Take (" + medpill + ") pill every " + medtaken)
        spannable.setSpan(
            ForegroundColorSpan(Color.RED),
            6, // start
            9, // end
            Spannable.SPAN_EXCLUSIVE_INCLUSIVE
        )
        var newText = spannable.toString()
        val med = Medication(id, newText)
        medList.add(med)
    }
    cursor.close()
    db.close()
    return medList
}
I found a package called google_sign_in_all_platforms, that can handle google sign-in across all platforms 🎉.
I also want to implement this. How can I do it? Basically, I want to use the user's local storage for this; is that possible?
Options: 1. using a Firebase deep link; 2. local storage (I want to go with local storage).
I found a package that supports Google Sign-In for all platforms including Windows and Linux. It is called google_sign_in_all_platforms. I have been using it for quite a while, and it works like a charm.
I've recently encountered an issue after manually deleting SDK 30.0.1 and then re-downloading the same version. Despite following the usual steps, I seem to be facing some challenges:
I deleted SDK 30.0.1 manually from my system.
I re-downloaded SDK 30.0.1 toolkit and attempted to set it up again.
However, I'm running into problems that I wasn't expecting. Could someone guide me on what might be going wrong or what additional steps I should take to ensure a smooth reinstallation?
Thanks in advance for your help!
I've also encountered this in Firestore when simulating a CREATE. It turns out you also need to specify the ID when POSTing to the collection (sample simulation).
It's generally not a good idea to emulate features from other languages in Rust. When you create a boxed trait object, you incur two kinds of overhead: 1. pointer indirection via the Box, which stores the value on the heap; 2. dynamic dispatch through the vtable to resolve the method call.
So it's best to avoid it unless absolutely necessary.
Additionally, a boxed trait object such as Box<dyn Iterator<Item = T>> defaults to a 'static bound. If your iterator implementations borrow other data, they cannot be stored behind such a box unless you spell out the lifetime (e.g. Box<dyn Iterator<Item = T> + 'a>), because otherwise the compiler cannot guarantee that the referenced data outlives the box, and rejects the code to uphold Rust's safety guarantees.
I think that when a cut (a partition of the graph into multiple connected components) has multiple light edges, we can choose any one of those edges and put it in the minimum spanning tree.
Try changing the driver in the ODBC data source from SQL Client to ODBC Driver for SQL Server.
If you pass the date as a string it creates the date in UTC.
const t1 = new Date(2024,11,12) // 11 because month starts at 0
// -> '2024-12-11T23:00:00.000Z' (I am in UTC+1)
const t2 = new Date("2024-12-12")
// '2024-12-12T00:00:00.000Z'
Just came across this post while searching for an issue I have. Does anyone know the behavior on iOS with PWAs added to the Home Screen? I would suppose the code still stops working after the PWA goes to the background. My issue is that when I re-open the PWA, updates which happened while it was in the background are not displayed. Appreciate any ideas!
What's wrong in your code:
You can update your logic as follows to get the expected output.
static void printPascal(int row, int column, int rowLimit) {
    while (row < rowLimit) {
        System.out.println("(" + row + ", " + column + ")");
        if (column < row) {
            column++; // Move to the next column in the current row
        } else {
            column = 0; // Reset column for the next row
            row++;     // Move to the next row
        }
    }
}
You need to expose a property that will represent the image URI and ensure it notifies the UI when it changes.
Add HeldPieceImageUri as a property with INotifyPropertyChanged to ensure the UI updates when the image changes.
Don’t forget to update your WPF XAML to include an Image control to preview the held piece. Also make sure the TetrisViewModel is set as the DataContext of your Window.
As mentioned above, I didn't manage to encapsulate the desired icon files into my executable to later access them with relative paths from my script. However, there is a way around this, as PyInstaller has no issues attaching an icon to the executable file itself. Afterwards I just read and decode the icon from the executable file, thanks to this post: How to extract 32x32 icon bitmap data from EXE and convert it into a PIL Image object?
My final script looks like this:
import sys

import win32api
import win32con
import win32gui
import win32ui
from PySide6.QtCore import Qt
from PySide6.QtGui import QImage, QPixmap
from PySide6.QtWidgets import QApplication, QMainWindow, QLabel


def extract_icon_from_exe(exe_path):
    """Extracts the icon from an executable and converts it to a QPixmap with transparency."""
    # Get system icon size
    ico_x = win32api.GetSystemMetrics(win32con.SM_CXICON)
    ico_y = win32api.GetSystemMetrics(win32con.SM_CYICON)
    # Extract the large icon from the executable
    large, small = win32gui.ExtractIconEx(exe_path, 0)
    if not large:
        raise RuntimeError("Failed to extract icon.")
    hicon = large[0]  # Handle to the large icon
    # Create a compatible device context (DC) and bitmap
    hdc = win32ui.CreateDCFromHandle(win32gui.GetDC(0))
    mem_dc = hdc.CreateCompatibleDC()
    hbmp = win32ui.CreateBitmap()
    hbmp.CreateCompatibleBitmap(hdc, ico_x, ico_y)
    mem_dc.SelectObject(hbmp)
    # Draw the icon onto the bitmap
    mem_dc.DrawIcon((0, 0), hicon)
    # Retrieve the bitmap info and bits
    bmpinfo = hbmp.GetInfo()
    bmpstr = hbmp.GetBitmapBits(True)
    # Convert to a QImage with transparency (ARGB format)
    image = QImage(bmpstr, bmpinfo["bmWidth"], bmpinfo["bmHeight"], QImage.Format_ARGB32)
    # Clean up resources
    win32gui.DestroyIcon(hicon)
    mem_dc.DeleteDC()
    hdc.DeleteDC()
    return QPixmap.fromImage(image)


class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Hello World Application")
        label = QLabel("Hello, World!", self)
        label.setAlignment(Qt.AlignmentFlag.AlignCenter)
        self.setWindowIcon(extract_icon_from_exe(sys.executable))
        self.setCentralWidget(label)


if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = MainWindow()
    window.resize(400, 300)
    window.show()
    sys.exit(app.exec())
TestApp.spec:
a = Analysis(
    ['test.py'],
    pathex=[],
    binaries=[],
    datas=[('my_u2net', 'my_u2net')],
    hiddenimports=[],
    hookspath=[],
    hooksconfig={},
    runtime_hooks=[],
    excludes=[],
    noarchive=False,
    optimize=0,
)
pyz = PYZ(a.pure)
exe = EXE(
    pyz,
    a.scripts,
    a.binaries,
    a.datas,
    [],
    name='TestApp',
    debug=False,
    bootloader_ignore_signals=False,
    strip=False,
    upx=True,
    upx_exclude=[],
    runtime_tmpdir=None,
    console=False,
    disable_windowed_traceback=False,
    argv_emulation=False,
    target_arch=None,
    codesign_identity=None,
    entitlements_file=None,
    icon=['app_icon.ico'],
)
@Tejzeratul: Sure. The sad fact is that I don't know yet how to set up HTTPS in dev mode, and I also don't want to bother with certificates etc. while still developing. It is a different thing to set up HTTPS on a production server, but my dev machine is not even reachable from the internet. @nneonneo: Thank you very much! I immediately tried out ngrok, and I immediately ran into CSRF problems: the login form was posted to http://...ngrok-free-app, while the request origin was https://...ngrok-free-app, so node_modules/@sveltejs/kit/src/runtime/server/respond.js threw a "Cross-site POST form submissions are forbidden" error. After trying more elegant approaches, I switched off CSRF protection. See above; I added a fourth step.
SELECT TRIM (TRAILING '"' FROM Category)--, TRIM (LEADING '"' FROM Category) FROM Content
UPDATE Content SET Category = TRIM (TRAILING '"' FROM Category)
UPDATE Content SET Category = TRIM (LEADING '"' FROM Category)
Here CATEGORY is the column name, and CONTENT is the table
If you've tried this and many other methods but it still complains about "symbol not found...", you may have missed one last step before you break your computer. I had been trying to mess with dependencies for days, and nothing happened except new errors. If you're at this point but haven't tried Invalidate Caches and restarting your project, give it a try. This is the only thing that worked for me.
Java 21.
The collection does not need to exist in your database beforehand. Simply import that model (no need to use it if you don't need to); mongoose will create the collection for you in the database.
This one works for me:
servers:
  server1,
  server2,
  server3
Apparently it was a network issue. I made both the S3 bucket and the Redshift cluster publicly accessible, and the COPY command executed successfully in a few minutes.
My problem was not using source for my LineupSerializer. After adding it the problem was solved and the serializer had access to all objects, including foreign keys in my models:
class LineupSerializer(serializers.ModelSerializer):
    players = LineupPlayerSerializer(source='lineup', many=True)
For me, it was necessary to use the official Nuxt extension in VS Code.
Even after installing the official extension, I still received the same "error" message, so I deleted the entire project and installed it again to reconfigure all the tsconfig.json files and the others.
@axandce's answer does what is expected, but I have to clarify a little on the commands used. Instead of using poetry config certificates.pythonhosted.org false, one has to use poetry config certificates.pythonhosted.cert false instead; I have tried it on my machine.
HACK ON.zip 1 Cannot delete output file : errno=13 : Permission denied : /storage/emulated/0/Android/data/com.dts.freefireth/files/il2cpp/Metadata/global-metadata.dat
Were you able to solve this problem? I am facing a similar one when trying to start the installation process of WordPress using the API Gateway URL.
Either use asyncpg or psycopg 3.2.3 (or any other relevant version), because psycopg2 does not support async operation, as mentioned in their official documentation.
Yes, if you need consistency between node_modules and pnpm-lock.yaml, especially in workspaces or deployments.
Proper installation: run:
pnpm i
Clear and reinstall:
rm -rf node_modules
pnpm i
Validate the lockfile:
rm -rf node_modules pnpm-lock.yaml
pnpm i
Check workspace configs: ensure pnpm-workspace.yaml and the lockfile are up to date, then run:
pnpm i
I have found the answer. I used the following code to get this done:
// **Handling checkboxes (last question) separately**
// Fetch checkbox values and filter them based on the options available in the form.
var form = FormApp.openById('1gFmmKPZ72O3l1hl93_rxhXwezPVqxNvGISEi7wnDP_o'); // Form ID
var checkboxesItem = form.getItems(FormApp.ItemType.CHECKBOX)[0].asCheckboxItem();
My guess is to check in on_modified() if isDirectory is true or not
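That guess can be sketched as follows. This assumes a watchdog-style handler, where events carry an is_directory flag and a src_path; a plain class and a fake event stand in for the real library so the filtering logic is shown in isolation.

```python
class FilteredHandler:
    """Sketch of a watchdog-style handler that ignores directory events.

    With the real watchdog library this would subclass
    watchdog.events.FileSystemEventHandler; a plain class is used here
    so the filtering logic can be shown (and run) on its own.
    """
    def __init__(self):
        self.seen = []

    def on_modified(self, event):
        if event.is_directory:
            return  # skip modifications reported for directories
        self.seen.append(event.src_path)

class FakeEvent:
    """Stand-in for a file-system event object."""
    def __init__(self, src_path, is_directory):
        self.src_path = src_path
        self.is_directory = is_directory

handler = FilteredHandler()
handler.on_modified(FakeEvent("/tmp/data.txt", is_directory=False))
handler.on_modified(FakeEvent("/tmp", is_directory=True))
print(handler.seen)  # only the file path remains
```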
ecm's comment is absolutely right - if I write section .text without the colon it works fine and prints Result: 0. A totally silent "error" until the program is run.
In my situation, I have two separate projects under the solution. The problem was that these projects were targeting different CPU architectures. You can fix this by changing your projects to target the same CPU architecture.
How can I make this, but with both functions using the "f" key?
@Danish Javed have you fixed the issue yet?
I wrote a blog post on this. The gist of the article is three possible fixes:
1. If your application does not rely on attribution-reporting, simply remove it from the Permissions-Policy header in your server or hosting configuration.
2. If you intend to use attribution-reporting, ensure that your app considers cross-browser quirks. Check for browser support using req.headers['user-agent'] and conditionally add the header:
const userAgent = req.headers['user-agent'];
if (userAgent.includes('Chrome/')) {
  res.setHeader("Permissions-Policy", "attribution-reporting=()");
}
3. If the header is being added by a dependency (e.g., a library or hosting provider), update the dependency or override its configuration. If you're using Vercel, you might want to use a vercel.json file:
{
  "headers": [
    {
      "source": "/(.*)",
      "headers": [
        {
          "key": "Permissions-Policy",
          "value": "geolocation=(), microphone=()"
        }
      ]
    }
  ]
}
Please try to upload another file, like a JPG or PDF, with MultipartFile and check it again.
You need to change the "Editor: Default Color Decorators" to "always".
Check this link: https://forums.developer.apple.com/forums/thread/17181 You can get your answer there.
You should JSON.parse() the talkjs value.
const response = {
  data: {
    talkjs: "{\"message\":{\"id\":\"msg_303fpzqsELNIYT6udk6A52\",\"text\":\"hello\"}}"
  }
};
const talkjs = JSON.parse(response.data.talkjs);
console.log(talkjs.message.text);
Try to use the absolute path in the redirect function.
Example:
redirect('http://localhost:3000/app')
Look at this example: https://godbolt.org/z/1TYco6xM1
The compiler will store the variable, if volatile, after each modification and read it back again.
If your ISRs are non-concurrent, then you can get away with not making it volatile, since the code would not get preempted; the access will be basically atomic.
That said, I would say: if you are not working alone on this project, make it volatile.
The speed impact will be small, as will the memory footprint. And most importantly, if the variable is at some point used in other places as well, you will have fewer issues with concurrent access.
You can also try Flee; it works as a native compiler, and you can also debug the code at runtime: https://github.com/mparlak/Flee
That tutorial is already outdated. There is no need for the 'start' command anymore; just run the app and call the 'acv snap' command (check out the readme in the acvtool repository).
In onUserBlock(), you need to return the result of onCompanies().
I did all this, and after that I am still facing "there was an error while performing this operation".
a = sh.cell(row=i, column=1).value — here I get an "a is not defined" error.
I am no expert with AWS, but I once had a similar issue in which the following URL helped me: https://repost.aws/questions/QURxK3sj5URbCQ8U2REZt7ow/images-not-showing-in-angular-application-on-amplify
We moved our images in S3, but the solution of modifying amplify.yml seems a possible way to fix your issue.
Hope this helps.
The main problem I had was that the android:name line in AndroidManifest.xml was placed wrongly.
I keep getting that same error for the code.
start_cord_df1 <- df1 %>% st_as_sf(coords = c("start_lng", "start_lat "))
rundll32.exe user32.dll,LockWorkStation
If you use the Swing library, you can just use setMnemonic(). But how? Suppose you have a JMenu in Swing; look at the code below:
JMenu setting = new JMenu("setting");
setting.setMnemonic('s');
It underlines the first letter. Hope this is useful for you.
just restart visual studio code
I need to move those markers from one position to another. How to do that? @DonMag
I don't know if you are new to embedded coding, but your code is missing a lot; maybe you should start over. You can follow online tutorials on YouTube.
Many enterprises require tracking of GitHub Copilot-generated code. If you are in an enterprise registered for GitHub Copilot for Business or Enterprise, you have a few APIs that cover the organization/team level, but not individual users, because of privacy concerns.
Metrics API: https://docs.github.com/en/rest/copilot/copilot-metrics?apiVersion=2022-11-28
- date
- total_active_users
- total_engaged_users
- copilot_ide_code_completions
  - total_engaged_users
  - languages
    - name
    - total_engaged_users
  - editors
    - name
    - total_engaged_users
    - models
      - name
      - is_custom_model
      - custom_model_training_date
      - total_engaged_users
      - languages
        - name
        - total_engaged_users
        - total_code_suggestions
        - total_code_acceptances
        - total_code_lines_suggested
        - total_code_lines_accepted
- copilot_ide_chat
  - total_engaged_users
  - editors
    - name
    - total_engaged_users
    - models
      - name
      - is_custom_model
      - custom_model_training_date
      - total_engaged_users
      - total_chats
      - total_chat_insertion_events
      - total_chat_copy_events
- copilot_dotcom_chat
  - total_engaged_users
  - models
    - name
    - is_custom_model
    - custom_model_training_date
    - total_engaged_users
    - total_chats
- copilot_dotcom_pull_requests
  - total_engaged_users
  - repositories
    - name
    - total_engaged_users
    - models
      - name
      - is_custom_model
      - custom_model_training_date
      - total_pr_summaries_created
      - total_engaged_users
Usage API: https://docs.github.com/en/rest/copilot/copilot-usage?apiVersion=2022-11-28
- day
- total_suggestions_count
- total_acceptances_count
- total_lines_suggested
- total_lines_accepted
- total_active_users
- total_chat_acceptances
- total_chat_turns
- total_active_chat_users
- breakdown
  - language
  - editor
  - suggestions_count
  - acceptances_count
  - lines_suggested
  - lines_accepted
  - active_users
The Metrics API does track repository metrics, but only for PR summaries, not for every piece of code generated on the IDE/editor side. If you want more detail, you may need to build a forward proxy: nginx can do TLS inspection to track any packet sent between the client and the GitHub API, as well as any VS Code telemetry while you are coding in a workspace associated with a repository.
To experiment before doing that, you can take a look at Fiddler to inspect any HTTPS body part to be tracked. I have a similar answer you can try with Fiddler initially: Why Github Copilot network request not appeared in Visual Studio Code Developer Tools?
I resolved this issue by migrating to OAuth2.
I would simply say that Sankar's approach only checks whether the value of each element of a[] equals the value at the next higher index. It does not check against all the other possible values in a[].
ALTER TABLE table_name CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;
Side note: you can avoid the hassle of using the system hostname command by using the vim.uv.os_gethostname() function.
You have to log in to Firebase from your console first. You can see the steps here.
Not sure what you are trying to achieve.
Most browsers do set limits. Safari on iOS truncates URLs longer than 4096 characters when displayed in the address bar. This truncation does not affect the actual URL used by the browser for network requests or JavaScript logic.
If you are worried about truncation, you can break the data into multiple parameters or use a POST request.
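The POST alternative can be sketched with the stdlib urllib (the endpoint here is hypothetical, and the request is only constructed, not sent): the long payload moves from the query string into the request body, so the URL itself stays short.

```python
from urllib.parse import urlencode
from urllib.request import Request

# A payload far too large to be comfortable in a query string.
payload = {"q": "x" * 10000}

# GET would put everything in the URL:
long_url = "https://example.invalid/search?" + urlencode(payload)

# POST keeps the URL short and carries the data in the body instead.
req = Request(
    "https://example.invalid/search",          # hypothetical endpoint
    data=urlencode(payload).encode("utf-8"),   # body, not query string
    method="POST",
)
print(req.get_method(), len(req.full_url), len(long_url))
```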
In our project the root cause was that we used the e2e.js file to add both hooks and util functions. Every test that imported the util functions imported the hooks as well; since e2e.js is imported by default, the hooks got executed twice.
If the same applies to you, just move the util functions out of e2e.js, update the imports in the tests, and keep only the hooks in the e2e.js file.
If you use two different DataTables, each with different columns, errors may occur and the related data for both tables may display as null.
Have you changed the APP_URL in the .env file?
Try resetting the config cache; it should refresh and resolve the issue.
If not, create an empty file at the path or check the permissions.
Install or update the FlutterFire CLI:
dart pub global activate flutterfire_cli
// Simple sorting algorithm to sort the linked list data structure
public void sortLinkedListDS() {
    Node cur = front;
    Node prev;
    int temp; // to hold data while swapping
    while (cur != null) {
        prev = cur.next;
        while (prev != null) {
            if (cur.data > prev.data) {
                temp = cur.data;
                cur.data = prev.data;
                prev.data = temp;
            }
            prev = prev.next;
        }
        cur = cur.next;
    }
}
I managed to fix this crash by wrapping the init body in a Task:
init() {
    Task {
        UNUserNotificationCenter.current().requestAuthorization(options: [.alert, .badge, .sound]) { (success, error) in }
    }
}
The app still crashes, but at a different point, so this is 'fixed'.
For those who, like us, got exactly the same error when trying to deploy a Flask (Dash) application on AWS Elastic Beanstalk: you need to set the WSGIPath (in Configuration → Updates, monitoring, and logging) to application:server.
The filename in my case is application.py, and the definitions in the code are:
application=dash.Dash(__name__)
server=application.server
Empty View Activity is useful when you want full control over the layout, typically for apps with unique designs or those that require custom functionality.
Basic Activity provides a starting point for standard apps, offering common UI components and simplified navigation, reducing the need for heavy customization.
Here is a snippet to toggle a CupertinoSegmentedControl programmatically, where sliderKey is the key given to the CupertinoSegmentedControl widget:
void toggle(int index) {
  final x = (sliderKey.currentContext!.findRenderObject() as RenderBox).size.width;
  (sliderKey.currentState as dynamic)?.onTapUp(TapUpDetails(
      kind: PointerDeviceKind.touch,
      localPosition: Offset(index * (x / (widget.children.length)), (widget.children.length + 1))));
}
Current options are either a static badge, or a dynamic badge that you'll need to point at your repo and that only partly does what you want.
I just changed the channel to stable and it works :)
Thanks, @Tsyvarev. I should have installed Boost and set an env var to the installation folder, for example:
b2 --prefix=c:\Dev\Boost_1_87_0_build install
Maybe my code will be useful to someone; I needed it for exactly this kind of formatting. I had to make many improvements to achieve a perfect result.
from PIL import Image

# QR code
qr_text = """
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
"""
# Split the text into lines and determine the dimensions
lines = qr_text.strip().split('\n')
height = len(lines)
width = max(len(line) for line in lines)

# Create an image with doubled height
img_height = height * 2
img_width = width
img = Image.new('1', (img_width, img_height), 1)  # fill the image with white

# Fill in the image
for y_text, line in enumerate(lines):
    for x_text, char in enumerate(line):
        # Handle '1'
        if char == '1':
            if x_text < img_width:
                if 2 * y_text < img_height:
                    img.putpixel((x_text, 2 * y_text), 0)
                if 2 * y_text + 1 < img_height:
                    img.putpixel((x_text, 2 * y_text + 1), 0)
        # Handle spaces
        elif char == ' ':
            if x_text > 0 and x_text < len(line) - 1 and line[x_text - 1] == '1' and line[x_text + 1] == '1':
                # A single space between '1's - fill with black
                if x_text < img_width:
                    if 2 * y_text < img_height:
                        img.putpixel((x_text, 2 * y_text), 0)
                    if 2 * y_text + 1 < img_height:
                        img.putpixel((x_text, 2 * y_text + 1), 0)
            else:
                # A regular space - fill with white
                if x_text < img_width:
                    if 2 * y_text < img_height:
                        img.putpixel((x_text, 2 * y_text), 1)
                    if 2 * y_text + 1 < img_height:
                        img.putpixel((x_text, 2 * y_text + 1), 1)

# Correction: stretch black pixels one step to the right
for y in range(img_height):
    pixels_to_change = []
    for x in range(img_width - 1):
        if img.getpixel((x, y)) == 0 and img.getpixel((x + 1, y)) == 1:
            pixels_to_change.append((x + 1, y))
    for x, y_coord in pixels_to_change:
        img.putpixel((x, y_coord), 0)

# Add a vertical column on the right
new_img_width = img_width + 1
new_img = Image.new('1', (new_img_width, img_height), 1)

# Copy the pixels from the old image
for x in range(img_width):
    for y in range(img_height):
        new_img.putpixel((x, y), img.getpixel((x, y)))

# Copy the last column of the old image into the new column
for y in range(img_height):
    new_img.putpixel((img_width, y), img.getpixel((img_width - 1, y)))

new_img.save('qr_code.png')
new_img.show()
I'm not sure, but I'm going mad. It seems pretty clear from the hordes of articles I've read that Maximum Insert Commit Size (MICS) controls when the BULK INSERT should commit the transaction, keeping the TLOG from growing uncontrollably.
But in my scenario, in the OLE DB Destination I have tried all kinds of combinations of Rows_Per_Batch and MICS, and nothing gives me the desired result: the TLOG keeps growing like crazy, and BULK INSERT inserts all 11.5M rows in one big transaction.
My TLOG grows to 43 GB, which is now giving me issues on the PROD server as it runs out of space.
I assumed a configuration of the OLE DB Destination with:
Rows_Per_Batch = 11,500,000 should help the optimizer know how much data the bulk insert is expected to handle (I have tried 10,000 as well; same result, the TLOG grows).
MICS = 100,000 (this setting seems to do nothing; in Profiler I see the BULK INSERT query, but it is missing the BATCHSIZE option, which should control the commit).
It seems obvious that MICS should be the answer, but whatever I try, I can't get it to avoid doing the entire load in one big transaction.
Any clarification on where I'm going wrong would be very welcome.
To plan the most efficient route for patio lights under a patio cover, measure the structure's dimensions and decide on the lighting design (e.g., perimeter or crisscross). Use hooks or clips designed for the cover material to secure the lights, ensuring even spacing. Opt for energy-efficient bulbs like LEDs for durability and connect them to a weatherproof outdoor outlet.
If I run it once, it may return 40 results. If I go to the "next page" of results (increment the start parameter by 10), it may say 49 results... or 21 results... It's all over the place.
I believe Google's indexes might have been updated during that time, and the start (offset) was reset.
Google doesn't have a definite fixed time for updating its indexes; it updates them constantly: https://support.google.com/websearch/answer/12412910?hl=en
To fix the issue, use 'file.file.seek(0)' to return the cursor to the start of the file. After reading through the content of the file, the cursor position is at the end, so a subsequent read returns nothing.
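The cursor behaviour can be demonstrated with an in-memory file object (for an UploadFile-style wrapper, as in the answer, the same call goes through its inner file attribute):

```python
import io

f = io.StringIO("first line\nsecond line\n")

first_pass = f.read()   # reading moves the cursor to the end
empty = f.read()        # a second read from the end returns ''

f.seek(0)               # rewind the cursor to the start
second_pass = f.read()  # now the full content is available again

print(repr(empty))                # ''
print(first_pass == second_pass)  # True
```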
Try setting this flag: CheckLatency=N
From the QuickFIX documentation:
If set to Y, messages must be received from the counterparty within a defined number of seconds (see MaxLatency). It is useful to turn this off if a system uses localtime for it's timestamps instead of GMT.
Original permissions for include directory:
sudo ls -l /usr/local/
drwxr-x---@ 4 root wheel 128B Aug 1 18:18 include
Modifying the directory permissions as below worked for me on Mac:
sudo chmod -R 755 /usr/local/include