You can solve this by enabling the Directions API. Currently, the only way to do this is through this link, because it's a Legacy API.
For some reason, some of your code might still be using the old Places API. You can activate it by going to this link (currently the only way, since it's a Legacy API), which should fix your issue.
Create a vite.config.ts file and add your domain to allowedHosts.
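As a sketch, assuming a recent Vite version that supports the server.allowedHosts option (the host name below is a placeholder):

```ts
// vite.config.ts -- sketch; replace the host with your own domain
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    allowedHosts: ["your-domain.example.com"],
  },
});
```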
A pretty simple answer to this question:
final Map json = {
"key1": {"key": [1,2,3]},
"key2": {"key": [4,5,7]},
"key3": {"key": [8,9,10]},
};
final firstEntry = json.entries.first.value;
print(firstEntry);
I wanted to accomplish the same thing, and @Veedrac got pretty close, but I did not want quotes around my floats, and I also wanted to be able to control the amount of precision. To do this I had to use the decimal library as well as the simplejson dumps implementation (in order to get the use_decimal functionality). Hopefully this helps someone else:
from decimal import Decimal, ROUND_DOWN
import simplejson as sjson

def json_dumps_decimal(data, precision=6):
    def recursive_converter(obj):
        if isinstance(obj, dict):
            return {key: recursive_converter(value) for key, value in obj.items()}
        elif isinstance(obj, list):
            return [recursive_converter(item) for item in obj]
        elif isinstance(obj, float):
            decimal_obj = Decimal(obj)
            return decimal_obj.quantize(Decimal('1e-{0}'.format(precision)), rounding=ROUND_DOWN)
        return obj
    return sjson.dumps(recursive_converter(data), use_decimal=True)
Calling it as follows yields the following output:
data = {"dictresults": {"val1": 1000, "val2": 1000, "val3": 0.0000012}, "listresults": [0.000034, 0.0, 0.00001], 'flatresult': 0.00000123456}
jsonstr = json_dumps_decimal(data)
print(jsonstr)
{"dictresults": {"val1": 1000, "val2": 1000, "val3": 0.000001}, "listresults": [0.000033, 0.000000, 0.000010], "flatresult": 0.000001}
Based on @CouchDeveloper and your own reply, you can create/add those global function overloads possibly to keep the same ergonomics:
func autoreleasepool<Result>(_ perform: @escaping () async throws -> Result) async throws -> Result {
    try await Task {
        try await perform()
    }.value
}

func autoreleasepool<Result>(_ perform: @escaping () async -> Result) async -> Result {
    await Task {
        await perform()
    }.value
}
Fixed: thanks to everyone for their help. The correct regex was
/(.*?)Player "(.*)" \(DEAD\)\ \(id=(.*) pos=<(.*)>\)\[HP: 0\] hit by Player "(.*)" \(id=(.*)\ pos=<(.*)>\) into (.*)\((.*)\) for(.*)damage (.*) with (.*) from (.*)\s*(.*) | Player "(.*)" \(DEAD\) (id=(.*)) (.*) killed by Player/
I ran the above on a larger log file and the returned data was as below:
[0] => 22:09:04 | Player "GigglingCobra52" (DEAD) (id=BE2ABB8084EEC781014AC8E6B5C88A5A90F855BF pos=<1658.8, 15056.8, 451.4>)[HP: 0] hit by Player "Cogito8434" (id=26DE70CAEF00AE579AF8CA21ED3F1648DB316F1E pos=<1659.4, 14990.8, 441.4>) into Head(0) for 36.7336 damage (Bullet_308WinTracer) with M70 Tundra from 66.7822 meters
22:09:04 | Player "GigglingCobra52" (DEAD) (id=BE2ABB8084EEC781014AC8E6B5C88A5A90F855BF pos=<1658.8, 15056.8, 451.4>) killed by Player "Cogito8434" (id=26DE70CAEF00AE579AF8CA21ED3F1648DB316F1E pos=<1659.4, 14990.8, 441.4>) with M70 Tundra from 66.7822 meters
[1] => 22:12:08 | Player "Cogito8434" (DEAD) (id=26DE70CAEF00AE579AF8CA21ED3F1648DB316F1E pos=<1656.3, 15053.1, 444.8>)[HP: 0] hit by Player "GigglingCobra52" (id=BE2ABB8084EEC781014AC8E6B5C88A5A90F855BF pos=<1654.7, 15052.8, 444.8>) into Torso(12) for 29.0213 damage (Bullet_556x45) with M4-A1 from 1.57712 meters
22:12:08 | Player "Cogito8434" (DEAD) (id=26DE70CAEF00AE579AF8CA21ED3F1648DB316F1E pos=<1656.3, 15053.1, 444.8>) killed by Player "GigglingCobra52" (id=BE2ABB8084EEC781014AC8E6B5C88A5A90F855BF pos=<1654.7, 15052.8, 444.8>) with M4-A1 from 1.57712 meters
I can now use the returned array data and pull the info I need. Thanks again to everyone for helping guide me to figure this out.
I am also having the same issue; the remote build succeeded, but the function app is not appearing. How did you fix it?
How did you resolve the error mentioned above?
What could be causing this error in LibreOffice?
Your approach (directly modifying the entity within a transaction) is more efficient, simpler, and cleaner. Your colleague's approach, on the other hand, is redundant in most cases, though useful when enforcing strict separation between data layers or when using immutable DTOs for business-logic transformations. Why?
In Spring Boot with Hibernate, entities are managed within the persistence context when retrieved inside a transaction. Any changes made to an entity will be automatically persisted at the end of the transaction without needing an explicit save(), thanks to dirty checking.
Your colleague's approach (entity → DTO → modify DTO → DTO → entity → save) results in unnecessary object creation and extra processing, leading to increased CPU and memory usage.
Using EAGER fetching by default can lead to unnecessary data being loaded, especially for deeply nested objects, increasing query execution time.
Your approach with LAZY fetching ensures that only the necessary data is loaded, improving efficiency.
For API responses: DTOs help avoid exposing entity structures directly.
For projections or transformations: If the data needs to be reshaped, DTOs are a good choice.
For external system integrations: When working with external APIs, DTOs provide a stable contract.
However, for modifying and persisting entities, DTOs should not be mandatory unless:
The DTO encapsulates fields that shouldn’t be directly mapped to an entity (e.g., some computed fields).
The update process involves a separate layer of validation or business logic that should not interact directly with entities.
Your approach should be significantly faster because it avoids unnecessary object transformations and repo.save().
The only scenario where converting to a DTO and back might be useful is when performing complex updates involving multiple entities, ensuring that only modified fields are applied in a controlled manner.
As rightly suggested, please provide a minimal reproducible example.
However, to get you started: one of the core issues is that await page.locator().all() returns an array of locators, so if you iterate over jobs directly, the locators resolve their properties asynchronously when accessed.
Instead of this:
const jobs = await page.locator('[data-testid="job-item-title"]').all();
I would suggest using `.evaluateAll()` to fetch the job links directly instead of locators:
const jobs = await page.locator('[data-testid="job-item-title"]').evaluateAll(nodes => nodes.map(n => n.href));
The problem was related to the permissions of the security group involved. Once that was fixed, I could make this work, even without the tunnel trick.
Did you ever get this working, Joshua Graham? I am running my application as a package, so the wwwroot is read-only and caching does not work:
no such file or directory, mkdir '/home/site/wwwroot/yyyy/standalone/apps/yyyy/.next/cache'
Adjust the TypeArgument in the For Each activity's Properties panel.
As a result, I use the LayoutUpdated event and, in its handler, run:
if (chromiumWebBrowser.IsBrowserInitialized)
{
chromiumWebBrowser.GetBrowser().GetHost().Invalidate(PaintElementType.View);
}
This gives the best result and the browser is resized correctly.
## GITHUB SOLUTION
- git config --global http.postBuffer 1048576000
- git config --global http.lowSpeedLimit 0
- git config --global http.lowSpeedTime 999999
The MySQL server starts with root privileges, so check /etc/passwd for root's shell. If it is /usr/local/bin/bash and that file doesn't exist on the file system, change it to another shell that does exist.
Personally, workflow managers (WFMs) have the most value for automating repetitive tasks where adjusting parameters or user intervention is essentially never necessary. Good examples are the already mentioned read mapping, or sorting of files: anything that just runs some tools with default settings. Or situations where things are tedious, for example when a number of intermediate files need to be created, collated, and ordered in some way so that downstream tools can run.
If it comes down to running a couple of one-liners on the command line, then the added effort of implementing this in a WFM might be overkill.
Not any more, unless you wish to rely on the community.
all_links = await page.locator("locator copied from html").all()
for link in all_links:
text = await link.inner_text()
print(text)
Use a CSS selector, not an XPath.
I got the "socket hang up" error after upgrading to .NET 8 and changing the Docker image. The problem was an incorrect port mapping in my `docker-compose.yml`. I had `ports: 8005:80`, which worked before the upgrade. However, the new .NET 8 images default to port 8080. Changing my `docker-compose.yml` to `ports: 8005:8080` fixed the issue.
**Key takeaway:** Ensure your `docker-compose.yml` port mapping matches the port your app listens on inside the container.
There were two things wrong:
1. I needed (foreignKeys.length > 0) && instead of foreignKeys &&
2. I needed Object.values(foreignKeys[0]).map instead of foreignKeys[0].values.map
[Note: using foreignKeys[0].values.map was a suggestion from Copilot, and it looked strange to me at the time it was suggested, but I thought to myself, "well, it's coming from an AI, so it must be correct." Lesson learned!!]
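To illustrate the second fix, here is a small sketch (the foreignKeys shape below is hypothetical, not from the original code):

```typescript
// Hypothetical row shape, for illustration only.
const foreignKeys = [{ name: "fk_user", table: "users" }];

// A plain object has no .values property, so foreignKeys[0].values.map throws.
// Object.values() returns the property values as an array instead:
const vals = Object.values(foreignKeys[0]); // ["fk_user", "users"]

// And for the first fix: an empty array is truthy, so `foreignKeys &&` is not
// a sufficient guard on its own; checking length > 0 is.
const hasKeys = foreignKeys.length > 0;
console.log(vals, hasKeys);
```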
I know it's a very old thread, but perhaps my answer can help other people with the same issue.
In my case, it was happening because the solution was not really running: due to inactivity, the IIS application pool had gone idle. To correct things, I just had to access the application through the browser (or a tool such as Postman).
After that, just reload the apps and everything should work.
Try opening your workspace so that your app is the root with the <properties_folder>/launchSettings.json one level down from the app.
You can define your own custom resource and use it from the context: prometheus.run_gauge.labels(...).set(1)
from dagster import InitResourceContext
from dagster_prometheus import PrometheusResource
from prometheus_client import Gauge
from pydantic import PrivateAttr
class CustomPrometheusResource(PrometheusResource):
    _run_gauge: Gauge = PrivateAttr(default=None)

    def setup_for_execution(self, context: InitResourceContext) -> None:
        super().setup_for_execution(context)
        self._run_gauge = Gauge(
            name='dagster_run_gauge',
            documentation='Status of Dagster runs',
            labelnames=['job_name', 'status', 'cluster'],
            registry=self._registry
        )

    @property
    def run_gauge(self):
        return self._run_gauge
It would be good to at least have a fully working published set of data. Currently some things work in one API, but when the response data is used for another we get a 404. It's very hard to even test a full cycle in development. The PRD data option is not good when, for example, we can't even test the booking process.
I was able to do it with `dask.array`.
import dask.array as da
import numpy as np
import xarray as xr

coords = ...
dims = ...
var_name = 'value'
chunks = (1, 13, 36, 128, 128)
encoding = {var_name: {'chunks': chunks}}
store = 'test.zarr'

daskarray = da.empty(
    (6, 13, 36, 699, 1920),
    chunks=chunks,
    dtype='float32',
)
daskarray[:] = np.nan

xr.DataArray(
    daskarray,
    coords=coords,
    dims=dims,
).to_dataset(name=var_name).to_zarr(store, mode='w', encoding=encoding)
@tsegismont posted a comment that the answer to "Vert.x httpClient/webClient process response chunk by chunk or as stream" is still up to date, and that HttpClient should be used when HTTP streaming must be connected with RecordParser. That means the second solution from the question is preferred:
RecordParser parser = RecordParser.newDelimited("\n", h -> log.info("r={}", h.toString()));
client
    .request(HttpMethod.GET, sut.actualPort(), "localhost", "/stream?file=stream1.txt")
    .compose(HttpClientRequest::send)
    .onComplete(
        ar -> {
          if (ar.succeeded()) {
            HttpClientResponse response = ar.result();
            response.handler(parser);
            response.endHandler(e -> ctx.completeNow());
          } else {
            ctx.failNow(ar.cause());
          }
        });
Ideally, a PR for the Vert.x documentation should clarify this.
Use onEndEditing instead of onBlur if you are dealing with multiple TextInput components.
Check the preinstalled software on the default Microsoft-hosted images; let's see what's on the current windows-latest, for example:
You'll notice that both WSL and Docker are installed. But there are at least two problems for you:
At the bottom of the page, you can see that the cached Docker images are all Windows-based.
The WSL installed is version 1, which is incompatible with recent versions of virtualization environments.
You could try to set up a step that updates WSL to v2 and then a command to switch Docker to Linux-based virtualization. But that's going to cost you pipeline execution time, and given the particularly pricey costs of Azure, I'd suggest another way.
You could try the new windows-2025 image for this pipeline, which comes with Docker and WSL 2 preinstalled by default.
Theoretically, this new image has been created specifically to resolve your problem, which is in line with Microsoft moving towards the Windows Server + containerized Linux services model.
Alright, here's a potential workaround, although I don't know if this did it or something else was responsible, so I'd like to keep the question active. What I did was make a new emulator for the latest version of Android and then test it in Meerkat. Then I went back and tried the emulators in Meerkat that I had previously made in Giraffe, and they seemed to work now. Not sure if that is coincidence or related; I would still be interested in any feedback on this.
I solved the issue by creating a new folder on my desktop and assigning it as my DerivedData folder: go to Xcode > Settings > Locations > Derived Data, select Custom, and set your path.
Tadaa!
I found it! You have to add a condition so that, if two or more identical objects are found, the relationship is deleted and recreated.
I'm closing the discussion. Have a nice day.
public function updateUser($object){
    $userToChange = $object->users()->first();
    if (empty($userToChange)) {
        $object->users()->save(User::where('id', Auth::user()->getAuthIdentifier())->first());
    } elseif ($object->users()->where('user_id', $userToChange->id)->count() >= 1) { // condition to detect if several identical objects exist
        $object->users()->detach($userToChange->id);
        $object->users()->save(User::where('id', Auth::user()->getAuthIdentifier())->first());
    } else {
        if (Auth::user()->getAuthIdentifier() != $userToChange->id) {
            $object->users()->sync(User::where('id', Auth::user()->getAuthIdentifier())->first());
        }
    }
}
This could mainly be due to a path error: your Python interpreter may be running on a separate path from where you are downloading the package. If possible, I would recommend using Anaconda Navigator or Jupyter Notebook, as they each offer a way to create an environment that will have your dependencies.
Hey, I am facing the same error now, using Spark Structured Streaming to read from Kafka and sink into an Iceberg table. Did you fix it?
This is a lifesaver! Thanks
I spent an entire DAY not getting the event to fire and it was the DTR and RTS issue!
So I tried to reproduce your problem by making a Snack on Expo, and I reached the same result: I'm not able to render the proper fontFamily inside the canvas, which I think is due to the WebView inside the react-native-canvas package. You cannot render the font family because expo-font does not reach the WebView context.
Also, looking through the pull requests of the canvas repo, I saw a contributor trying to implement a Font API, but it wasn't merged: https://github.com/iddan/react-native-canvas/pull/294.
To conclude, the expo-font plugin doesn't have the flexibility to add fonts inside WebView contexts, at least not this way.
This is the code that I made to reproduce this issue:
import { Text, SafeAreaView, StyleSheet } from 'react-native';
import { useEffect } from 'react';
import { useFonts } from 'expo-font';
import Canvas from 'react-native-canvas';

const WaitFont = (props) => {
  const [loaded, error] = useFonts({
    'eightBit': require('./assets/fonts/eightBit.ttf'),
  });

  useEffect(() => {
    if (loaded || error) {
      console.log('Working?')
    }
  }, [loaded, error]);

  if (!loaded && !error) {
    return null;
  }

  return props.children
}

export default function App() {
  const handleCanvas = (canvas) => {
    if (!canvas) return;
    const ctx = canvas.getContext('2d');
    ctx.font = '500 26px "eightBit"';
    ctx.fillText("Hello from <Canvas />", 8, 28);
  };

  return (
    <SafeAreaView style={styles.container}>
      <WaitFont>
        <Text style={styles.paragraph}>
          {`Hello from <Text />`}
        </Text>
        <Canvas style={styles.canvas} ref={handleCanvas} />
      </WaitFont>
    </SafeAreaView>
  );
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    flexDirection: 'column',
    justifyContent: 'center',
    backgroundColor: '#ecf0f1',
    padding: 8,
  },
  paragraph: {
    fontSize: 50,
    fontWeight: '500',
    fontFamily: 'eightBit'
  },
  canvas: {
    borderWidth: 1,
    borderColor: 'red',
    width: '100%'
  }
});
When you use an expression template and you also want a theme, you should use a template theme.
The following configuration should work:
{
  "Name": "Console",
  "Args": {
    "formatter": {
      "type": "Serilog.Templates.ExpressionTemplate, Serilog.Expressions",
      "template": "{@t:HH:mm:ss.fff zzz} | {@l:u3} | {@m}\n{@x}",
      "theme": "Serilog.Templates.Themes.TemplateTheme::Code, Serilog.Expressions"
    }
  }
}
Thanks again for this answer. I modified your regex a bit in order to work correctly with the LibreOffice REGEX function: I added .* twice, at the beginning and at the end of the regex.
With blah blahTESTblab la14blah-15S rebla in cell A1:
=REGEX(A1;"(?i)(TEST)(?:.*(\d{1,2}S)|)";"★$1$2") gives blah blah★TEST15S rebla
=REGEX(A1;"(?i).*(TEST)(?:.*(\d{1,2}S)|).*";"★$1$2") gives ★TEST15S
The below way is working:
URL queryURL = getClass().getClassLoader().getResource("db.mongo.query/" + this.name + ".json");
String query = new String(Files.readAllBytes(Paths.get(queryURL.toURI())));
@meshack-pi This fixed it for me; that was exactly my issue, a dependency on the HttpClientModule. Thanks for your answer.
If anyone else has a similar issue, what I found worked for me is either
or
There may be some fiddly workarounds possible with setting global TextEncoders and TextDecoders that I've seen suggested, but this didn't work for me.
I have a similar problem. I haven't completely solved it yet, but I have discovered some interesting findings; maybe this will help someone. I'm glad you found the cause in editor.autoIndent, but it still doesn't solve the problem for me, so I'll leave the information here. In my case, "indentNextLinePattern" is not working.
1 - It is strange that you expect a single "increaseIndentPattern" to work. My experiments and reading showed that it doesn't work alone; it needs a "decreaseIndentPattern" pair to work. Even stranger is that you wrote "Then I discovered that I had 'editor.autoIndent': 'none' in the settings.json file, which was the problem." So just by enabling "editor.autoIndent", you got the single "increaseIndentPattern" rule working, without its "decreaseIndentPattern" pair? That's very strange.
2 - Expressions like "if (3 > 2) {" have no meaning here. VS Code automatically adds indentation if you press Enter after an opening bracket. This works even if no separate automatic indentation rule is defined for the language. It works for the brackets (, [, {.
3 - That said, in my case the "increaseIndentPattern" + "decreaseIndentPattern" and "onEnterRules" options work, but "indentNextLinePattern" does not. That's the weirdness.
4 - Neural networks help a lot with regex: ChatGPT, GitHub Copilot, Perplexity, Claude. They are also good at explaining the principles and design standards of language packs. With neural nets, I was making fast progress in creating custom language highlighting, until I got stuck with this weird problem with "indentNextLinePattern". All in all, I created the bulk of the highlighting in about 3 days; polishing and working out the details took me another week or so. Without neural networks, I would have been sorting it out for at least a month, and most likely would have quickly abandoned it because of the complexity and intricacy of the topic. Many thanks to neural networks; I love them very much.
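For reference, a sketch of how such a paired rule might look in a language pack's language-configuration.json (the patterns below are placeholders, not taken from the original language pack):

```json
{
  "indentationRules": {
    "increaseIndentPattern": "^\\s*begin\\b.*$",
    "decreaseIndentPattern": "^\\s*end\\b.*$"
  }
}
```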
The solution for me was to add a Height in the ContentPage definitions. For example
<ContentPage ...
             HeightRequest="{OnIdiom Desktop=740,
                                     Phone=*}">
I have the same problem. How did you resolve it?
This type of error is often linked to a Java / Gradle / AGP version incompatibility.
Can you run the following command from the root of the Flutter project to do a first check:
flutter analyze --suggestions
According to this answer: https://stackoverflow.com/a/537831/2773515, on the first question linked in your post, I think the best practice you mentioned is clarified there. So, IMO, no: exceptions shouldn't manipulate anything based on an "exception != error" directive. Your validation class will be responsible for checking and manipulating the entity thrown by your custom exception, which in turn is responsible for alerting your application that a business rule was violated.
Maybe you should consider not returning data in the custom exception, or not throwing an exception at all, and instead have your validation class return a custom class with the manipulated entity and a false state result.
It seems I was able to override it in my local profile by not specifying a value:
logging.structured.format.console=
I was also looking for "disable" kind of value, but there is nothing like that.
I got Claude to help me solve this:
"I've identified the issue: there's improper PowerShell code in your postgresql.conf file that's preventing PostgreSQL from starting. This happened when you tried to install pgvector.
Follow these steps to fix the problem:
Open a command prompt as Administrator
Run: notepad "C:\Program Files\PostgreSQL\17\data\postgresql.conf"
Find and delete these lines (around line 769):
param($match)
$libraries = $match.Groups[1].Value
if ([string]::IsNullOrWhiteSpace($libraries)) {
    return "shared_preload_libraries = 'vector'"
} else {
    return "shared_preload_libraries = '$libraries,vector'"
}
Add this line in place of the deleted code:
shared_preload_libraries = 'vector' # (change requires restart)
Save the file and close Notepad
Start the PostgreSQL service:
Using the command: net start postgresql-x64-17
Or alternatively, restart the computer
This will fix the syntax errors in your configuration file and properly enable pgvector.
@Jon Dingman You are a genius!
I am infinitely grateful to you! Your solution saved me hours of work. I searched for a solution myself for a long time, but your contribution was exactly what I needed.
I'm hitting the same wall. Any progress?
This issue has been resolved.
Thank you for your response.
Yes, the problem is with multiple versions of @types/react. Make sure all copies of @types/react are the same version; you can check with `yarn why @types/react`.
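If the duplicate copies come from transitive dependencies, one common way to force a single copy with Yarn classic is the resolutions field in package.json (the version below is a placeholder, not taken from the original answer):

```json
{
  "resolutions": {
    "@types/react": "18.2.45"
  }
}
```

After adding it, reinstall and re-run yarn why @types/react to confirm only one version remains.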
SomeGuy just saved my life with OpenSSH on Windows. I spent almost a day trying to configure public-key auth with no success.
Then I found your post about the encoding of authorized_keys and applied it to administrators_authorized_keys in C:/ProgramData/ssh/.
I saved it as UTF-8 with VS Code, and finally I could log in without a password.
OpenSSH public-key login on Windows is very obscure; even with DEBUG on, the sshd service doesn't tell you why it is denying the key.
Same here. Let me know if anyone could find a solution.
Still an issue nowadays on some older printers.
In my case, a Honeywell PC42T Plus. I followed Levite's idea (thanks) and used the ^FB command followed by a ^GB command (box) with a thick white border. In my case
^GB240,50,50,W
and it works fine.
In newer versions of doxygen, the \include special command has an additional option: with {doc}, the contents of file.txt are treated as if they were directly in the source file.
// --- SourceFile.c ---
/*! \brief description
* \include{doc} file.txt
*/
// --- file.txt ---
// \details description
YOU'RE THE BEST!! This solution saved my day. Thank you very much!
Try to parameterize the dispatcher so it is configurable according to the context. If the code uses Dispatchers.Main or Dispatchers.IO, it is advisable to allow injecting it as a parameter. That way, a TestDispatcher can be provided in tests, ensuring that execution occurs at the right time.
After you type 'jupyter notebook' in the Anaconda Prompt or command line, at the bottom of the output you will find a URL like the one below. Copy and paste the URL; that will work.
http://localhost:8888/?token=**------------------------------------**
I haven't got the rep to respond to @Mark Brackett's answer; I had to change a .ToInt32() to .ToInt64(), which stopped the OverflowException I was getting with the example code.
This line
var pCurrentSessionInfo = new IntPtr(pSessionInfo.ToInt32() + (NativeMethods.SESSION_INFO_502.SIZE_OF * i));
Becomes
var pCurrentSessionInfo = new IntPtr(pSessionInfo.ToInt64() + (NativeMethods.SESSION_INFO_502.SIZE_OF * i));
In addition to byLazy's answer, you can also use onCompletion like this:
val f1 = flowOf(1, 2)
val f2 = flowOf(3, 4)
val f = f1.onCompletion { if(it == null) emitAll(f2) }
The repeated change detection with ngModel is a common characteristic of two-way binding on input elements. To reduce it, use ngModelChange where possible, and also try debouncing the date-picker selection events.
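As a sketch of the debouncing idea in plain TypeScript (framework-agnostic; the 50 ms wait is an arbitrary example value):

```typescript
// Minimal debounce sketch: delays a call until `wait` ms of silence,
// so a rapid burst of change events collapses into one handler invocation.
function debounce<T extends unknown[]>(fn: (...args: T) => void, wait: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: T) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), wait);
  };
}

// Usage sketch: three rapid calls result in a single handler run.
let calls = 0;
const onDateChange = debounce(() => { calls += 1; }, 50);
onDateChange();
onDateChange();
onDateChange();
setTimeout(() => console.log(calls), 100); // a single call after the burst
```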
I know that some compilers use a key tool that makes the program appear to have valid licenses and such; that may be it.
I got this working using the Authorization Code Flow with PKCE by:
Reading the code hash from the URL
Calling msalInstance.acquireTokenByCode with
In addition to jammykam's answer, this issue occurs because Sitecore doesn’t automatically remove orphaned renderings when a parent rendering with a nested placeholder is deleted. You need to clean up orphaned renderings in the layout field.
I’ve written a blog post detailing the issue and a solution - check it out here.
column += increase
This actually only increases the loop variable. You actually want to increase the element in your matrix, which would look something like this:
def change_value(my_matrix: list, increase: int):
    for i, row in enumerate(my_matrix):
        for j in range(len(row)):
            my_matrix[i][j] += increase
    return my_matrix

matrix = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
change_value(matrix, 3)
What is your Node version? Have you tried using the latest LTS Node.js release?
If you have a bunch of emails or a PST file and you need to redact multiple emails at once with an interactive editor, you can also just use our tools here: https://emailtools.hexamail.com/redact
For some users, just updating Android Studio to the newest version seems to fix it.
Simply replacing % with %%, as previously mentioned, did not work for me because I encountered a password authentication failed error.
To resolve this, I URL-encoded the password first and then replaced %.
Here's my final code:
import os
from urllib.parse import quote_plus

_pg_password = quote_plus(os.getenv("DB_PASSWORD", "default")).replace("%", "%%")
I have finally figured out how to solve what I wanted.
I'm posting it here as it might help someone else in the same situation.
I now install the program in "administrative install mode", but the old software is uninstalled as the logged-in user, if that user had previously installed the old software.
[Code]
function PrepareToInstall(var NeedsRestart: Boolean): string;
var
  OldAppGuid, SubKeyName: string;
  OldAppFound: Boolean;
  ResultCode: Integer;
begin
  NeedsRestart := false;
  result := '';
  begin
    OldAppGuid := '{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}';
    SubKeyName := 'SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\' + OldAppGuid;
    OldAppFound := RegKeyExists(HKEY_LOCAL_MACHINE, SubKeyName);
    if not OldAppFound then
    begin
      SubKeyName := 'SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\' + OldAppGuid;
      OldAppFound := RegKeyExists(HKEY_LOCAL_MACHINE, SubKeyName);
    end;
    if OldAppFound then
    begin
      ExecAsOriginalUser(ExpandConstant('{sys}\msiexec.exe'),  // Filename
        '/X ' + OldAppGuid + ' /qb- REBOOT=ReallySuppress',    // Params
        '',                                                    // WorkingDir
        SW_SHOW,                                               // ShowCmd
        ewWaitUntilTerminated,                                 // Wait
        ResultCode);                                           // ResultCode
    end;
  end;
end;
I kept getting this error when calling signtool sign in the post build event of .NET project. Turns out I simply had an older version of signtool.exe and Windows Kits. After updating the Windows SDK (to version 11), it was resolved.
Clone and scan it?
git clone https://github.com/google/flatbuffers -b v23.1.21
grype ./flatbuffers
✔ Indexed file system flatbuffers
✔ Cataloged contents e3c82e6c6bf71c090ee235f26b43aee9b40f120eb4652d8626c7cd714bead4fc
├── ✔ Packages [222 packages]
├── ✔ File digests [17 files]
├── ✔ File metadata [17 locations]
└── ✔ Executables [0 executables]
✔ Scanned for vulnerabilities [13 vulnerability matches]
├── by severity: 0 critical, 7 high, 6 medium, 0 low, 0 negligible
└── by status: 13 fixed, 0 not-fixed, 0 ignored
[0000] WARN no explicit name and version provided for directory source, deriving artifact ID from the given path (which is not ideal)
NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY
braces 3.0.2 3.0.3 npm GHSA-grv7-fg5c-xmjg High
cross-spawn 7.0.3 7.0.5 npm GHSA-3xgq-45jj-v275 High
esbuild 0.16.4 0.25.0 npm GHSA-67mh-4wv8-2f99 Medium
google.golang.org/grpc v1.35.0 1.56.3 go-module GHSA-m425-mq94-257g High
google.golang.org/grpc v1.35.0 1.56.3 go-module GHSA-qppj-fm5r-hxr3 Medium
google.golang.org/grpc v1.39.0-dev 1.56.3 go-module GHSA-m425-mq94-257g High
google.golang.org/grpc v1.39.0-dev 1.56.3 go-module GHSA-qppj-fm5r-hxr3 Medium
micromatch 4.0.5 4.0.8 npm GHSA-952p-6rrq-rcjv Medium
semver 5.6.0 5.7.2 npm GHSA-c2qf-rxjj-qqgw High
semver 7.3.7 7.5.2 npm GHSA-c2qf-rxjj-qqgw High
word-wrap 1.2.3 1.2.4 npm GHSA-j8xg-fqg3-53r7 Medium
wget https://repo1.maven.org/maven2/org/rogach/scallop_2.13/5.1.0/scallop_2.13-5.1.0-sources.jar
grype ./scallop_2.13-5.1.0-sources.jar
✔ Indexed file system ./scallop_2.13-5.1.0-sources.jar
✔ Cataloged contents 79a24a3a5c54dd926ea9b41cc1258e58e395f25141c518b1c14afb869cb0bb9d
├── ✔ Packages [1 packages]
├── ✔ File digests [1 files]
├── ✔ File metadata [1 locations]
└── ✔ Executables [0 executables]
✔ Scanned for vulnerabilities [0 vulnerability matches]
├── by severity: 0 critical, 0 high, 0 medium, 0 low, 0 negligible
└── by status: 0 fixed, 0 not-fixed, 0 ignored
No vulnerabilities found
Thanks to CDP1802 for the contribution; I followed and used his code as a template. I have some constraints: it's part of a small accounting system where both income and expenses are displayed on the same sheet, so I couldn't test against row 1 / the first row. Instead, if the header/total row hasn't shown up within 3 rows, I break. My test criterion for a header row is that all columns have content; my criterion for a total row is that it differs from a completely empty row. Some rows are not completely empty outside the range of the table, therefore I am a little stubborn :-)) and use only the first to last column of the table, offset by a certain number of rows, like this:
Private Function CorrectRangeForHeaderRows(rRng As Range) As Range
'rRng.Select
Dim tmpRng As Range
Set tmpRng = rRng.Range(Cells(1, 1), Cells(1, rRng.Columns.Count))
'tmpRng.Select
'Loop to a full Header row
Dim lCor As Long: lCor = 1
Do While WorksheetFunction.CountA(tmpRng.Offset(-lCor)) < rRng.Columns.Count
lCor = lCor + 1
If rRng.Row - lCor <= 1 Then
Exit Do
End If
Loop
If rRng.Row - lCor > 1 Then
Set rRng = rRng.Offset(-lCor).Resize(rRng.Rows.Count + lCor)
End If
'rRng.Select
CorrectRangeForHeaderRows = rRng
End Function
Private Function CorrectForTotalRow(rRng As Range) As Range
'rRng.Select
Dim tmpRng As Range
Set tmpRng = rRng.Range(Cells(1, 1), Cells(1, rRng.Columns.Count))
'tmpRng.Select
Dim lCor As Long: lCor = rRng.Rows.Count
Do While WorksheetFunction.CountA(tmpRng.Offset(lCor)) = 0
lCor = lCor + 1
If lCor > rRng.Rows.Count + 2 Then
Exit Do
End If
Loop
If lCor <= rRng.Rows.Count + 2 Then
Set rRng = rRng.Resize(lCor + 1)
End If
'rRng.Select
Set CorrectForTotalRow = rRng 'a Range is an object, so the return needs Set
End Function
I ran into the same issue today. I realised that the start command did not generate 'blocks-manifest.php' as it should. My workaround: copy blocks-manifest.php from the build folder into the 'src' folder, so webpack will also build this file into the 'build' folder (for WP's reference) when using 'run start' during development. Then delete this file when you're ready to build.
Configure your dashboard to only generate relative urls.
public function configureDashboard(): Dashboard
{
return parent::configureDashboard()
->setTitle('Backoffice - ')
->generateRelativeUrls();
}
1) Restart VS Code (sometimes this alone solves the issue).
2) Update VS Code: go to "Help" -> "Check for Updates"; if an update is available, installing it may also solve the issue you mention.
3) Uninstall and reinstall the specific extensions you need (a common approach: uninstall unwanted extensions, restart VS Code, then reinstall the specific extension you need).
4) Uninstall your current version of VS Code and reinstall the newer version. (This isn't the best idea in general, but if you have no other choice you can try it.)
You play the sound and immediately destroy the object with queue_free.
You have to wait until the sound has finished playing and then call queue_free, or let the sound play from another scene.
Use the finished() signal of the AudioStreamPlayer2D and then call queue_free,
or
use the playing property of the AudioStreamPlayer2D to check whether the sound is done playing.
It's working here with your configuration:
I just ran docker build . --platform linux/amd64 and got an image on my ARM Mac.
docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 8394447e8084 4 minutes ago 59.6MB
syft scan 8394447e8084 --scope all-layers
✔ Loaded image 8394447e8084
✔ Parsed image sha256:8394447e80846d52d7047063a7b5c47ff2a1795e5baeda03d3fb6362a99f9f94
✔ Cataloged contents 655512525c2ef2fe56e4890d9acd5852ea5729901fb1a99abcccd88c6bccae60
├── ✔ Packages [4 packages]
├── ✔ File digests [943 files]
├── ✔ File metadata [943 locations]
└── ✔ Executables [2 executables]
NAME VERSION TYPE
base-files 12.4+deb12u10 deb
netbase 6.4 deb
redis 7.4.2 binary
tzdata 2025a-0+deb12u1 deb
Are you using an old version of Syft? The latest is v1.21.0.
Do you have a syft configuration file that is overriding the defaults? (I am not)
I suggest reading the answers to this question. I was able to solve the problem by:
install.packages("installr")
install.packages("magick")
@Matt Lang, thanks for this. I spent 3 sessions with support and they never gave me this HOST. Saved me a few hours for sure.
Programmatic revocation of delegated permissions (full or partial) without admin may not work. Direct users to revoke access manually via Microsoft portals, as APIs require elevated permissions not available in your scenario. Even if you log users out and clear the token, this doesn't revoke permission.
In my case I was not cloning the repository to the build agent in my pipeline.
Adding - checkout: self at the very beginning of the job fixed my issue.
Special thanks to the answer in the following: Azure DevOps Pipeline Terraform Init fail
Use the NULLIF(value, 0) function, where value is a potential divisor. If it equals zero, NULL will be returned; otherwise, the correct division result will be returned.
Example:
SELECT 30 / NULLIF(0, 0)
\[NULL\]
SELECT 30 / NULLIF(5, 0)
6
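The same behaviour can be checked from Python with the stdlib sqlite3 module (SQLite also supports NULLIF, and a SQL NULL comes back as None):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# NULLIF(0, 0) yields NULL, and 30 / NULL is NULL, so no division-by-zero error.
print(con.execute("SELECT 30 / NULLIF(0, 0)").fetchone()[0])  # None
# NULLIF(5, 0) yields 5, so the division proceeds normally.
print(con.execute("SELECT 30 / NULLIF(5, 0)").fetchone()[0])  # 6
```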
It's late, but you should generate the trace ID for each request on the client, not pass it back to the client.
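A minimal sketch of client-side generation, assuming a W3C-trace-context-style trace ID of 16 random bytes (the helper name is my own):

```python
import uuid

def new_trace_id() -> str:
    # A W3C trace-context trace-id is 16 random bytes, hex-encoded
    # (32 lowercase hex characters). uuid4 gives us 16 random bytes.
    return uuid.uuid4().hex

# The client generates the ID and attaches it to each outgoing request,
# e.g. in a header; the server only propagates it.
trace_id = new_trace_id()
```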
With 10 (or even 11) it was working fine, seems to be an issue since 12.
The following code should illustrate how to achieve two ComboBox widgets where the choices of the second depend on the current value of the first. It uses the changed signal of the first ComboBox to call reset_choices on the second ComboBox, updating its choices whenever the value of the first ComboBox changes. The possible choices are stored in a dict.
import napari
from magicgui import magicgui

# Dictionary to save choice dependencies.
choices = {
    "Choice 1": ["First A", "First B", "First C"],
    "Choice 2": ["Second A", "Second B", "Second C", "Second D"],
    "Choice 3": ["Only one choice"],
}

# The choices function receives the `ComboBox`
# widget as argument.
def get_second_choice(gui):
    # Before final initialization the parent
    # of our `ComboBox` widget is None.
    if gui.parent is not None:
        return choices[gui.parent.box1.value]
    else:
        return []

@magicgui(
    box1={"widget_type": "ComboBox", "choices": list(choices.keys())},
    box2={"widget_type": "ComboBox", "choices": get_second_choice},
)
def widget(box1, box2, viewer: napari.Viewer) -> str:
    return f"{box1} - {box2}"

# Changes to box1 should result in resetting
# choices in box2.
widget.box1.changed.connect(widget.box2.reset_choices)
I have one super solution to download invoices from SAP through VF03
If you want to convert markdown to Slack-supported markdown in Python, you can use: https://pypi.org/project/slackify-markdown/
Disclaimer: I am the developer of this library.
Awesome..!! All that 'noise' in Scrapy has been driving me nuts and this solution works a treat! Thank you.
It looks like the calculation of minuteOfDay isn't correct:
time % 1 is always 0 (for an integer time).
It should probably be:
// const minuteOfDay = this.minutesInDay * (time % 1);
const minuteOfDay = time % this.minutesInDay;
Code sandbox:
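For reference, the same wrap-around can be sketched in Python, assuming time is a running minute counter (names are mine, not from the original code):

```python
MINUTES_IN_DAY = 24 * 60  # 1440

def minute_of_day(time: int) -> int:
    # Wrap a running minute counter into a single day.
    # time % 1 would always be 0 for integers; modulo by the
    # day length is what gives the minute-of-day.
    return time % MINUTES_IN_DAY

print(minute_of_day(1500))  # 60, i.e. one hour into the second day
```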
Found a post that already answered a similar question; hope it helps:
Django-Allauth equivalent for Django Ninja?
The second question I can't answer, but ninja_jwt may be useful.
As @novelistparty noted, you can run the type format command at the lldb prompt. If you want to have these settings saved in Xcode, you can create a ~/.lldbinit file with this same setting, along with whatever other settings you want in the file:
type format add -f decimal uint8_t
This was a bug that's been fixed in Flutter 3.29.1. Updating to this or newer Flutter version should do the trick.
[CP][Impeller] Fix text glitch when returning to foreground.
Turns out my client was using Project Permission Mode; there is some documentation stating that could have been the case here.
At the time I didn't look into it deeper, as I could not see the option to change our environment to Project Permission Mode even though I was a Project Server admin. This wrongly led me to believe it was deprecated, and I kept trying to solve it using SharePoint Permission Mode approaches.
Having access to my clients environment I could allow the needed permissions to the user created for the integration.
In my case I was missing the
[keyring.backend] Loading Google Auth
line in publish -vvv output and what helped me was
poetry add --dev poetry
poetry run poetry publish ....
This doesn't seem relevant for the initial question, but may help others having similar problem.
To me, this is a two-part definition of the term "authorization". Classic authorization is when an application looks at the logged-in user's permissions and decides what the user can do. The new, OAuth2 way is from the user's perspective: the user is authorizing the application to use the user's data.
Classic: the application is authorizing the user.
OAuth2: the user is authorizing the application.
So by that definition there is no authentication in OAuth2; rather, OAuth2 relies on other parties to do the authentication, i.e. login with Google etc. Google does the authenticating, and the OAuth2 protocol trusts those identity providers. So rather than having an authentication step, there is only a "redirect to identity provider" step for authentication.